This is a talk given at the Redwood Center for Theoretical Neuroscience, UC Berkeley, on November 28, 2006. The speaker is Thomas Dean of Brown University and Google.
Talk announcement.
Title: Learning Invariant Features Using Inertial Priors, or "Why Google might want to be in the neocortex business?"
Abstract: We address the technical challenges involved in combining key features from several theories of the visual cortex in a single computational model. The resulting model is a hierarchical Bayesian network factored into modular component networks implementing variable-order Markov models. Each component network has an associated receptive field corresponding to components in the level directly below it in the hierarchy. The variable-order Markov models account for features that are invariant to naturally occurring transformations in their inputs. These invariant features support efficient generalization and produce increasingly stable, persistent representations as we ascend the hierarchy. The receptive fields of proximate components on the same level overlap to restore selectivity that might otherwise be lost to invariance. Technical jargon aside, we believe there is enough known about the primate cortex to enable engineers to build systems that approach the pattern-recognition capability of human vision. Moreover, we believe that such a capability can be implemented using the distributed computing infrastructure that Google has today.
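To make the architecture in the abstract concrete, here is a minimal structural sketch in Python. It is not the speaker's implementation; all class names, parameters, and the pooling scheme are hypothetical. It shows the three ingredients the abstract names: modular components arranged in a hierarchy, each with a receptive field over the level below; overlapping receptive fields between proximate components; and a variable-order Markov model per component that accumulates statistics over input sequences, the raw material for temporally stable, invariant features.

```python
# Hypothetical sketch of the hierarchy described in the abstract.
# Not the author's code: names, the discretization, and the pooling
# rule are illustrative assumptions.

from collections import defaultdict

class VariableOrderMarkovModel:
    """Counts variable-length contexts (up to max_order) so that
    frequently recurring input sequences can be grouped into stable,
    temporally persistent features."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []

    def observe(self, symbol):
        # Update counts for every context length up to max_order.
        for k in range(min(len(self.history), self.max_order) + 1):
            context = tuple(self.history[len(self.history) - k:])
            self.counts[context][symbol] += 1
        self.history.append(symbol)

    def predict(self, symbol):
        # Back off from the longest matching context to shorter ones.
        for k in range(min(len(self.history), self.max_order), -1, -1):
            context = tuple(self.history[len(self.history) - k:])
            total = sum(self.counts[context].values())
            if total:
                return self.counts[context][symbol] / total
        return 0.0

class Component:
    """One modular component network whose receptive field covers
    components (or raw inputs) in the level directly below."""
    def __init__(self, receptive_field):
        self.receptive_field = receptive_field  # indices into the level below
        self.vmm = VariableOrderMarkovModel()

    def step(self, lower_outputs):
        # Pool the receptive field into one discrete symbol and feed it
        # to this component's variable-order Markov model.
        symbol = tuple(lower_outputs[i] for i in self.receptive_field)
        self.vmm.observe(symbol)
        return symbol

def build_level(n_below, rf_size=3, stride=2):
    """Tile a level with receptive fields that overlap (stride < rf_size),
    restoring selectivity that pooling alone would lose."""
    return [Component(list(range(start, start + rf_size)))
            for start in range(0, n_below - rf_size + 1, stride)]

if __name__ == "__main__":
    level1 = build_level(n_below=8)   # components over 8 raw inputs
    frame = [0, 1, 1, 0, 1, 0, 0, 1]  # one time step of binary input
    outputs = [c.step(frame) for c in level1]
    print(len(level1), "components; overlapping fields:",
          [c.receptive_field for c in level1])
```

Stacking further calls to build_level over the outputs of the level below would yield the hierarchy; in the full model each component is a Bayesian network and inference passes beliefs both up and down, which this sketch omits.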
Note: A PDF file containing slides for the talk is available through the "Download - All files" link.